
AI and Discrimination: What Tech Companies Can Do Now

AI is already learning how to discriminate

By Erica Kochi
March 15, 2018

Co-founder, UNICEF Innovation

What happens when robots take our jobs, or take on military roles, or drive our vehicles? When we ask these questions about the rapidly expanding role of AI, there are others we’re often overlooking—like the subject of a WEF paper released this week: how do we prevent discrimination against and marginalization of humans in artificial intelligence?

Machines are increasingly automating decisions. In New York City, for instance, machine learning systems have been used to decide where garbage gets collected, how many police officers to send to which neighborhoods, and whether a teacher should keep their job. These decision-making technologies bring up equally important questions.

While using technology to automate decisions isn’t a new practice, the nature of machine learning technology (its ubiquity, complexity, exclusivity, and opacity) can amplify long-standing problems related to discrimination. We have already seen this happen: Google’s photo-tagging feature, for instance, mistakenly labeled Black people as gorillas. Predictive policing tools have been shown to amplify racial bias. And hiring platforms have prevented people with disabilities from getting jobs. The potential for machine learning systems to amplify discrimination is not going away on its own. Companies need to actively teach their technology not to discriminate.

What happens when machines learn to discriminate?

In many parts of the world, particularly in middle- and low-income countries, the implications of using machine learning to make decisions that fundamentally affect people’s lives—without taking adequate precautions to prevent discrimination—are likely to have far-reaching, long-lasting, and potentially irreversible consequences. For example:

  • Insurance companies can now predict an individual’s future health risks. At least two private multinational insurance companies operating in Mexico today are using machine learning to maximize the efficiency and profitability of their operations. The obvious way to do this in health insurance is to attract as many healthy (i.e., low-cost) customers as possible and deter less healthy (i.e., high-cost) ones. We can easily imagine a scenario in which these multinational insurance companies, in Mexico and elsewhere, use machine learning to mine a large variety of incidentally collected data (from shopping history, public records, demographic data, etc.) to recognize patterns associated with high-risk customers and charge those customers exorbitant and exclusionary prices for health insurance. A huge segment of the population—the poorest, sickest people—would then be unable to afford insurance and would be deprived of access to health services.
  • In Europe, more than a dozen banks are already using micro-targeted models that leverage machine learning to “accurately forecast who will cancel service or default on their loans, and how best to intervene.” Imagine what would happen if an engineer building such a machine learning system in India were to weight variables related to income more heavily than variables reflecting the timeliness of past payments. Chances are that such an application would systematically categorize women (especially those who are further marginalized based on their caste, religion, or educational attainment) as less worthy of a mortgage loan—even if they are shown to be better at paying back their loans on time than their male counterparts—because they historically make less money than men do. While the algorithm might be “accurate” in determining which applicants make the most money, it would overlook crucial, context-specific criteria that would contribute to a more accurate and fairer approach to deciding how to provide the often-crucial opportunities afforded by mortgage lending. A simple numerical sketch of this effect follows this list.
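
To make the mortgage example concrete, here is a minimal sketch in Python. The data, feature weights, and approval cut-off are entirely invented for illustration; the point is only that a scoring rule weighting income far more heavily than repayment history will approve the higher-earning group at a much higher rate, even when the lower-earning group repays more reliably.

```python
# Hypothetical illustration: a loan-scoring rule that over-weights income.
# All data, weights, and thresholds below are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated applicants: group A earns more on average; group B repays more reliably.
group = rng.choice(["A", "B"], size=n)
income = np.where(group == "A",
                  rng.normal(60_000, 10_000, n),
                  rng.normal(35_000, 8_000, n))
on_time_rate = np.where(group == "A",
                        rng.beta(8, 3, n),   # mean ~0.73
                        rng.beta(9, 2, n))   # mean ~0.82

def score(income, on_time_rate, w_income=0.8, w_repay=0.2):
    """Weighted score; income dominates when w_income >> w_repay."""
    income_norm = (income - income.mean()) / income.std()
    repay_norm = (on_time_rate - on_time_rate.mean()) / on_time_rate.std()
    return w_income * income_norm + w_repay * repay_norm

approved = score(income, on_time_rate) > 0.0  # arbitrary cut-off

for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: approval rate {approved[mask].mean():.2%}, "
          f"mean on-time rate {on_time_rate[mask].mean():.2%}")
# Despite group B's better repayment behaviour, its approval rate is far lower,
# because the score rewards income far more than repayment history.
```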

What companies can do

These scenarios tell us that while machine learning can do incredibly good things for this world, those benefits are not inevitable. We need to look closely at the ways discrimination can creep into these systems, and the ways that companies can act proactively to secure a bright future for machine learning. To that end, we recommend eight steps all companies involved in machine learning can and should take to maximize the shared benefit of this game-changing technology while minimizing real risks to human rights:
  1. Develop and enhance industry-specific standards for fairness and non-discrimination in machine learning, as the Partnership on AI, the World Wide Web Foundation, the AI Now Institute, IEEE, FATML, and others have begun to do.
  2. Enhance company governance for adherence to human rights guidelines through internal codes of conduct and incentive models (outline what qualifies as discrimination within the context at hand, and how each person involved in a discriminatory model would be held responsible).
  3. Assess wider impacts of a new AI system by mapping out risks before releasing it, throughout its lifecycle, and for each new use case of a machine learning application.
  4. Take an inclusive approach to design. Ensure diversity in development teams, and train designers and developers on human rights responsibilities.
  5. Optimize machine learning models for fairness, accountability, transparency, and editability. Include fairness criteria (a minimal per-group check is sketched after this list) and participate in open-source data and algorithm sharing.
  6. Monitor and refine algorithms: monitor machine learning model use across different contexts and communities, keep models contextually relevant, and organize human oversight.
  7. Measure, evaluate, and report. Where machine learning interacts with the public and makes decisions that significantly affect individuals, ensure that appropriate notices are provided.
  8. Provide channels to share machine learning impact transparently. Establish open communication channels with representative groups of the people that machine learning applications can affect.
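
As an illustration of points 5 and 6, a team could routinely compare outcome rates across the groups its model affects. The sketch below assumes binary approve/reject predictions and uses the common “four-fifths” heuristic as a flagging threshold; both the example data and the threshold are illustrative assumptions, not a prescribed standard.

```python
# A minimal per-group fairness check along the lines of items 5 and 6.
# Group labels, predictions, and the 0.8 ("four-fifths") threshold are
# illustrative assumptions, not a prescribed standard.
from collections import defaultdict

def approval_rates(groups, predictions):
    """Return the fraction of positive (approved) predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += int(p)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Example usage with made-up predictions:
groups      = ["A", "A", "A", "B", "B", "B", "B"]
predictions = [1,   1,   0,   1,   0,   0,   0]
rates = approval_rates(groups, predictions)
print(rates)                          # approx. {'A': 0.67, 'B': 0.25}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True} -> group B flagged for review
```

A check like this does not prove or disprove discrimination on its own, but running it per context and per community gives human reviewers a concrete trigger for the oversight the recommendations call for.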

World Economic Forum Executive Chairman Klaus Schwab recently called for companies to “shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.” To do so, we need to design and use machine learning to maximize the shared benefit of this game-changing technology while minimizing real risks to human rights.


Author: Erica Kochi co-founded and co-leads UNICEF’s Innovation Unit.

Source: AI and discrimination: What tech companies can do — Quartz at Work

